22 February 2021

Russ Allbery: Review: Finder

Review: Finder, by Suzanne Palmer
Series: Finder Chronicles #1
Publisher: DAW Books
Copyright: 2019
ISBN: 0-7564-1511-X
Format: Kindle
Pages: 391
Fergus Ferguson is a repo man, or professional finder as he'd prefer. He locates things taken by people who don't own them and returns them to their owners. In this case, the thing in question is a sentient starship, and the person who stole it is Arum Gilger, a warlord in a wired-together agglomeration of space habitats and mined-out asteroids named Cernekan. Cernee, as the locals call it, is in the backwaters of human space near the Gap between the spiral arms of the galaxy. One of Fergus's first encounters in Cernee is with an old lichen farmer named Mattie Vahn who happens to take the same cable car between stations that he does. Bad luck for Fergus, since that's also why Gilger's men first disable and then blow up the cable car, leaving Mattie dead and Fergus using the auto-return feature of Mattie's crates to escape to the Vahns' home station. The Vahns are not a power in Cernee, not exactly, but they do have some important alliances and provide an inroad for Fergus to get the lay of the land and map out a plan to recover the Venetia's Sword.

This is a great hook. I would happily read a whole series about an interstellar repo man, particularly one like Fergus who only works for the good guys and recovers things from petty warlords. Fergus is a thoughtful, creative loner whose style is improvised plans, unexpected tactics, and thinking on his feet rather than either bluster or force (although there is a fair bit of death in this book, some of which is gruesome). About two-thirds of the book is in roughly that expected shape. Fergus makes some local contacts, maps out the political terrain, and maneuvers himself towards his target through a well-crafted slum of wired-together habitats and working-class miners. Also, full points for the creative security system on the starship that tries to solve a nearly impossible problem (a backdoor supplementing pre-shared keys with a cultural authentication scheme that can't be vulnerable to brute force or database searches).

Halfway through, though, Palmer throws a curve ball at the reader that involves the unexplained alien presence that's been lurking around the system. That part of the plot shifts focus somewhat abruptly from the local power struggle Fergus has been navigating to something far more intrusive and personal. Fergus has to both reckon with a drastic change in his life and deal with memories of his early life on an Earth drowning in climate change, his abusive childhood, and his time spent in the Martian resistance. This is also a fine topic for an SF novel, but I think Finder suffered a bit from falling between two stools. The fun competence drama of the lone repossession agent striking back against petty tyrants by taking away their toys is derailed by the sudden burst of introspection and emotional processing, but the processing is not deep or complex enough to carry the story on its own. Fergus had an awful and emotionally alienated childhood followed by some nasty trauma, to which he has responded by carefully never getting close to anyone so that he never hurts anyone who relies on him. And yet, he's a fundamentally decent person and makes friends despite himself, and from there you can probably write the rest of the arc yourself. There's nothing wrong with this as the emotional substrate of a book that's primarily focused on an action plot, but the screeching change of focus threw me off. The good news is that the end of the book returns to the bits that I liked about the first half.
The mixed news is that I thought the political situation in Cernee resolved much too easily and much too straightforwardly. I would have preferred the twisty alliances to stay twisty, rather than collapse messily into a far simpler moral duality. I will also speak on behalf of all the sentient starship lovers out there and complain that the Venetia's Sword was woefully underused. It had better show up in a future volume! This unsteadiness and a few missed opportunities make Finder a good book rather than a great one, but I was still happily entertained and willing to write that off as first-novel unevenness. There are a lot of background elements left unresolved for a future volume, but Finder comes to a satisfying conclusion. Recommended if you're looking for an undemanding space action story with a quick pace and decent, if not very deep, characters. Followed by Driving the Deep. Rating: 7 out of 10

16 February 2021

Vincent Fourmond: QSoas tips and tricks: permanently storing meta-data

It is one thing to acquire and process data, but the data themselves are most often useless without the context, the conditions in which the experiments were made. This additional information can be called meta-data. In a previous post, we have already described how one can set meta-data on data that are already loaded, and how one can make use of them. QSoas is already able to figure out some meta-data in the case of electrochemical data, most notably in the case of files acquired by GPES, ECLab or CHI potentiostats. However, only a small number of manufacturers are supported as of now[1], and there are a number of experimental details that the software is never going to be able to figure out for you, such as the pH, the sample, what you were doing... The new version of QSoas provides a means to permanently store meta-data for experimental data files:
QSoas> record-meta pH 7 file.dat
This command uses record-meta to permanently store the information pH = 7 for the file file.dat. Any time QSoas loads the file again, either today or in one year, the meta-data will contain the value 7 for the field pH. Behind the scenes, QSoas creates a single small file, file.dat.qsm, in which the meta-data are stored (in the form of a JSON dictionary). You can set the same meta-data on many files in one go, using wildcards (see load for more information). For instance, to set the pH=7 meta-data for all the .dat files in the current directory, you can use:
QSoas> record-meta pH 7 *.dat
You can only set one piece of meta-data per call to record-meta, but you can use it as many times as you like. Finally, you can use the /for-which option of load or browse to select only the files which have the meta-data you need:
QSoas> browse /for-which=$meta.pH<=7
This command browses the files in the current directory, showing only the ones that have a pH meta-data which is 7 or below.
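Since the sidecar files are just small JSON files, they can also be inspected or reused outside QSoas. Below is a minimal Python sketch along those lines; note that the flat {"pH": 7} layout and the helper name are assumptions based on the description above, not the documented on-disk format of the .qsm files.

import json
from pathlib import Path

def read_qsoas_meta(data_file):
    """Return the meta-data stored next to a data file, if any.

    Assumes the sidecar is named <data_file>.qsm and holds a flat JSON
    dictionary such as {"pH": 7}; the exact layout QSoas writes may differ.
    """
    sidecar = Path(str(data_file) + ".qsm")
    if not sidecar.exists():
        return {}
    with sidecar.open() as fh:
        return json.load(fh)

# Example: list the .dat files of the current directory with pH <= 7,
# mirroring the /for-which=$meta.pH<=7 selection shown above.
for dat in sorted(Path(".").glob("*.dat")):
    meta = read_qsoas_meta(dat)
    if meta.get("pH") is not None and meta["pH"] <= 7:
        print(dat, meta)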

[1] I'm always ready to implement the parsing of other file formats that could be useful for you. If you need parsing of special files, please contact me, sending the files in question and the meta-data you'd expect to find in them. About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

15 February 2021

Raphaël Hertzog: Freexian's report about Debian Long Term Support, January 2021

A Debian LTS logo Like each month, have a look at the work funded by Freexian's Debian LTS offering. Debian project funding In January, we put aside 2175 EUR to fund Debian projects. As part of this, Carles Pina i Estany started to work on better no-dsa support for the PTS, which recently resulted in two merge requests that will hopefully be deployed soon. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article. Debian LTS contributors In January, 13 contributors were paid to work on Debian LTS; their reports are available: Evolution of the situation In January we released 28 DLAs and held our first LTS team meeting for 2021 on IRC, with the next public IRC meeting coming up at the end of March. During that meeting Utkarsh shared that after he rolled out the python-certbot update (on December 8th 2020) the maintainer told him: "I just checked with Let's Encrypt, and the stats show that you just saved 142,500 people from having their certificates start failing next month. I didn't know LTS was still that used!"

Finally, we would like to welcome sipgate GmbH as a new silver sponsor. Also remember that we are constantly looking for new contributors. Please contact Holger if you are interested. The security tracker currently lists 43 packages with a known CVE and the dla-needed.txt file has 23 packages needing an update. Thanks to our sponsors! Sponsors that joined recently are in bold.

10 February 2021

Dirk Eddelbuettel: RcppSMC 0.2.3 on CRAN: Updated Snapshot

A new release 0.2.3 of the RcppSMC package arrived on CRAN earlier today. Once again it progressed as a very quick pretest-publish within minutes of submission. Thanks, CRAN! RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. This release somewhat belatedly merges a branch Leah had been working on and which we all realized is ready. We now have a good snapshot to base new work on, perhaps for the Google Summer of Code 2021.

Changes in RcppSMC version 0.2.3 (2021-02-10)
  • Addition of a Github Action CI runner (Dirk)
  • Switching to inheritance for the moveset rather than pointers to functions (Leah in #45).

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

5 January 2021

Russell Coker: Planet Linux Australia

Linux Australia have decided to cease running the Planet installation on planet.linux.org.au. I believe that blogging is still useful and a web page with a feed of Australian Linux blogs is a useful service. So I have started running a new Planet Linux Australia on https://planet.luv.asn.au/. There has been discussion about getting some sort of redirection from the old Linux Australia page, but they don't seem to be able to do that. If you have a blog that has a reasonable portion of Linux and FOSS content and is based in or connected to Australia then email me on russell at coker.com.au to get it added. When I started running this I took the old list of feeds from planet.linux.org.au, deleted all blogs that hadn't had posts for 5 years and all blogs that were broken and had no recent posts. I emailed people who had recently broken blogs so they could fix them. It seems that many people who run personal blogs aren't bothered by a bit of downtime. As an aside I would be happy to set up the monitoring system I use to monitor any personal web site of a Linux person and notify them by Jabber or email of an outage. I could set it to not alert for a specified period (10 mins, 1 hour, whatever you like) so it doesn't alert needlessly on routine sysadmin work, and I could have it check SSL certificate validity as well as the basic page header.

31 December 2020

Chris Lamb: Free software activities in December 2020

Here is my monthly update covering what I have been doing in the free software world during December 2020 (previous month):

Reproducible Builds One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. This month, I: I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including releasing version 163:

Debian Uploads I also sponsored an upload of adminer (4.7.8-2) on behalf of Alexandre Rossi and performed two QA uploads of sendfile (2.1b.20080616-7 and 2.1b.20080616-8) to make the build reproducible (#776938) and to fix a number of other unrelated issues. Debian LTS This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project. You can find out more about the Debian LTS project via the following video:

Shirish Agarwal: Pandemic, Informal economy and Khau Galli.

Formal sector issues Just today it was published that output from eight formal sectors of the economy, which make up the bulk of the Indian economy, was down on a month-to-month basis. This undercuts all those apologists for the Government who said that it was OK that the Govt. didn't actually deliver the 20 lakh crore package which was announced. In fact, a businessman from my own city, a certain Prafull Sarda, had asked via RTI what happened to the 20 lakh crore package which was announced. The answers were in all media as well as newspapers, but on the inside pages. You can see one of the articles sharing the details here. No wonder Vivek Kaul also shared his take on the way things will hopefully go for the next quarter, which seems a tad optimistic from where we are at the moment.
Eight Sectors declining in Indian Economy month-on-month CNBC TV 18
The Informal economy The informal economy has been strangled by the current party in power ever since it came into power. This has resulted in the demise of many small businesses which are informal and were part of the culture of cities and towns. I share an article from 2018 which only shows how well and how on the mark it has aged in the last two years. The damage is all too real to ignore. I will share more anecdotes and experiences, because sadly there has never been any interest shown, especially by the GOI, in collecting any stats about the informal economy. CMIE has done some good work in that area, even though they mostly look at formal, usually blue-collar work, where again there is not much good data. Here is an anecdote and a learning from these small businesses which an MBA guy probably wouldn't know and, in all honesty, wouldn't even care about.

Khau galli A few years back, circa 2014 and onwards, when the present Govt. came into power, it did come with a lot of promises. One of these was that a lot of informal businesses would be encouraged to grow, and in time, hopefully, become part of the formal economy. Or at least that was the story that all of us were told. As part of that they made a lot of promises and also named quite a few places where street food was in abundance. Such lanes were named Khau galli; for those who are from North India, the term is easily known and understood. It was just saying that here are some places where you could get a variety of food without paying the obscene prices you would have to vis-a-vis a restaurant. Slowly, they raised the rates of inputs (food grains, gas cylinders etc.), which we know of as food inflation, and via GST made sure that the restaurants were able to absorb some of the increased inputs (input credit) while still being more than competitive with the street food vendors. The restaurant F&B model is pretty well known, so I am not going there at all. It is, however, important to point out that they didn't make any new khau gallis or such; most or all of the places had existed for years and even decades before that. They also didn't take any extra effort in marketing the khau gallis, or pair them with chefs or marketing folks so that the traditional could marry the new. They just named them; that was the only gain to be seen on the ground.

In its heyday, the khau galli near my home used to have anywhere between 20-30 thelas or food carts. Most of the food carts would be made of wood with very limited steel. Such food carts would cost anywhere around INR 15-20k, instead of the food cart you see here. The only reason I shared that link is to show how a somewhat typical thela or food cart looks in India; of course YouTube or any other media platform would show many. On top of that, you need and needed permission from the municipality, a license which would be auctioned. That license could well run from thousands into lakhs depending on various factors, or you gave something to the municipal worker when he did his rounds/beat, much like a constable, every day or week. Apart from that, you also have raw material expenditure which could easily run into a few thousand depending upon what sort of food you are vending. You would also typically have 2-3 workers, so a typical thela would feed not only its customers but also the 2-3 labourer families, as well as surrounding businesses. I used to be loyal and usually go to the few whose food I found tasty or which somehow were good for me. In either case, a relationship was formed.
As I have never been fond of crowds, I usually used to go in their off-beat hours, either when they were close to packing up or when I knew they usually had a lull. That way I knew I would get the complete attention of the vendors. Many a time I used to see money change hands between the vendors themselves, and I saw both camaraderie as well as competition between them. Years ago, while sharing a chai (tea) with one of the street vendors, I casually asked: I have often seen you guys exchanging money with each other, and most of the time quite a bit of the money is also given to the guy who didn't make that many sales, or any sales at all. The vendor replied, sharing practical symbiotic knowledge: All of us are bound by a single thing called poverty. All of us struggle. Do you know why so many people come here? Because they know that there will be a variety of food to be had. Now if we stopped helping each other, the number of people who would make the effort would also be less; we know we are not the only game in town. Also, whatever we give, sooner or later it gets adjusted. And if one of us has good days, he knows hardship days are not far. Why? Simply because people change tastes or want variety. So irrespective of how good or bad the skills of the vendor are, she or he is bound to make some sales. The vendor either shares the food with us or whatever; somehow these things just work out. And that doesn't mean we don't have our fights, we do have our fights, but we also understand this. Now you see this and you understand that these guys have and had a community. Even if they changed places for one reason or the other, they kept themselves connected. Unlike many of us, who find it hard to keep up even with friends, let alone relatives.

Now cut to 2020, and where there used to be 20-30 thelas near my home, there are only 4-5. Of course there are multiple reasons, but one of the biggest was demonetization. That was a huge shock to which many of the thela walas succumbed. Their entire savings and capital were turned to dust. Many of their customers would turn up with either an INR 500 or INR 2000/- note, where at most a dish cost INR 100/-, most times half or even a third of that amount. How and from where could the thela walas get that kind of cash? These are people who, only if they earn, will they and their family have bread at night. Most of the loose change was tied up at middle- to higher-tier restaurants, which were giving between INR 20/- and 30/- for every INR 100/- of change in rupees and coins. Quite a few bankers made money by that as well as by other means with which the thela walas just could not compete. These guys also didn't have any black money, even though they were and are part of the black/informal economy. Sadly, till date no economist or even sociologist, as far as I know, has attempted or done any work on this industry. If you want to formalize such businesses then at the very least understand their problems and devise solutions. And I suspect that what has happened near my house has also happened everywhere else, at least within the geographical confines of the Indian state. Whether it was the 2016 demonetization or the pandemic, the results and effects have been similar all over. Some states did do well and still do, but the suffering still continues. With the hope that the new year brings cheer to you, as well as some more ideas for the thela walas to remain in business, I bid you adieu and see you in the new year.

5 December 2020

Thorsten Alteholz: My Debian Activities in November 2020

FTP master Unfortunately a day only has 24h. As the freeze is approaching, I had to concentrate a bit more on keeping my packages in shape. So this month I only accepted nine packages. The good news: I rejected no package. The overall number of packages that got accepted was 328. Debian LTS This was my seventy-seventh month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 22.75h. During that time I did LTS uploads of: I also started to work on x11vnc and slirp. Last but not least I did some days of frontdesk duties. Debian ELTS This month was the twenty-ninth ELTS month. During my allocated time I uploaded: Unfortunately I also had to give back some hours. Last but not least I did some days of frontdesk duties. Other stuff This month I uploaded new upstream versions of: I fixed one or two bugs in: I improved packaging of: and there have been even some new packages: As it is again this time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. Like in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug is closed. Don't hesitate, start to squash :-). The announcement on the mailing list can be found here.

11 November 2020

Vincent Fourmond: Solution for QSoas quiz #1: averaging spectra

This post describes the solution to Quiz #1, based on the files found there. The point is to produce both the average and the standard deviation of a series of spectra. Below is how the final averaged spectrum should look:
I will present here two different solutions. Solution 1: using the definition of standard deviation There is a simple solution using the definition of the standard deviation: $$\sigma_y = \sqrt{\langle y^2 \rangle - \langle y \rangle^2}$$ in which \(\langle y^2 \rangle\) is the average of \(y^2\) (and so on). So the simplest solution is to construct datasets with an additional column that would contain \(y^2\), average these columns, and replace the average with the above formula. For that, we need first a companion script that loads a single data file and adds a column with \(y^2\). Let's call this script load-one.cmds:
load ${1}
apply-formula y2=y**2 /extra-columns=1
flag /flags=processed
When this script is run with the name of a spectrum file as argument, it loads it (replacing ${1} with the first argument, the file name), adds a column y2 containing the square of the y column, and flags it with the processed flag. This is not absolutely necessary, but it makes it much easier to refer to all the spectra when they are processed. Then, to process all the spectra, one just has to run the following commands:
run-for-each load-one.cmds Spectrum-1.dat Spectrum-2.dat Spectrum-3.dat
average flagged:processed
apply-formula y2=(y2-y**2)**0.5
dataset-options /yerrors=y2
The run-for-each command runs the load-one.cmds script for all the spectra (one could also have used Spectrum-*.dat to avoid giving all the file names). Then, the average command averages the values of the columns over all the datasets. To be clear, it finds all the values that have the same X (or very close X values) and averages them, column by column. The result of this command is therefore a dataset with the average of the original \(y\) data as the y column and the average of the original \(y^2\) data as the y2 column. So now, the only thing left to do is to use the above equation, which is done by the apply-formula code. The last command, dataset-options, is not absolutely necessary but it signals to QSoas that the standard error of the y column should be found in the y2 column. This is now available as the script method-one.cmds in the git repository.
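For readers who want to sanity-check the result outside QSoas, here is a small numpy sketch of the same \(\sigma_y = \sqrt{\langle y^2 \rangle - \langle y \rangle^2}\) computation. It assumes the spectra are plain two-column text files sharing exactly the same wavelength grid (file names as in the quiz); it is an illustration, not part of the QSoas scripts.

import numpy as np

files = ["Spectrum-1.dat", "Spectrum-2.dat", "Spectrum-3.dat"]

# Load the y column of each spectrum; the x (wavelength) grids are assumed identical.
data = [np.loadtxt(f) for f in files]
x = data[0][:, 0]
ys = np.stack([d[:, 1] for d in data])      # shape: (n_spectra, n_points)

mean_y = ys.mean(axis=0)                    # <y>
mean_y2 = (ys ** 2).mean(axis=0)            # <y^2>
sigma_y = np.sqrt(mean_y2 - mean_y ** 2)    # sqrt(<y^2> - <y>^2)

# Save x, average and standard deviation side by side for plotting.
np.savetxt("average-with-errors.dat", np.column_stack([x, mean_y, sigma_y]))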

Solution 2: use QSoas's knowledge of standard deviation The other method is a little more involved but it demonstrates a good approach to problem solving with QSoas. The starting point is that, in apply-formula, the value $stats.y_stddev corresponds to the standard deviation of the whole y column... Loading the spectra yields just a series of x,y datasets. We can contract them into a single dataset with one x column and several y columns:
load Spectrum-*.dat /flags=spectra
contract flagged:spectra
After these commands, the current dataset contains data in the form of:
lambda1	a1_1	a1_2	a1_3
lambda2	a2_1	a2_2	a2_3
...
in which the ai_1 come from the first file, ai_2 the second and so on. We need to use transpose to transform that dataset into:
0	a1_1	a2_1	...
1	a1_2	a2_2	...
2	a1_3	a2_3	...
In this dataset, the values of the absorbance at the same wavelength for each dataset are now stored in columns. The next step is just to use expand to obtain a series of datasets with the same x column and a single y column (each corresponding to a different wavelength in the original data). The game is now to replace these datasets with something that looks like:
0	a_average
1	a_stddev
For that, one takes advantage of the $stats.y_average and $stats.y_stddev values in apply-formula, together with the i special variable that represents the index of the point:
apply-formula "if i == 0; then y=$stats.y_average; end; if i == 1; then y=$stats.y_stddev; end"
strip-if i>1
Then, all that is left is to apply this to all the datasets created by expand, which can be done using run-for-datasets, and then we reverse the splitting by using contract and transpose! In summary, this looks like the following. We need two files. The first, process-one.cmds, contains the following code:
apply-formula "if i == 0; then y=$stats.y_average; end; if i == 1; then y=$stats.y_stddev; end"
strip-if i>1
flag /flags=processed
The main file, method-two.cmds looks like this:
load Spectrum-*.dat /flags=spectra
contract flagged:spectra
transpose
expand /flags=tmp
run-for-datasets process-one.cmds flagged:tmp
contract flagged:processed
transpose
dataset-options /yerrors=y2
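Outside QSoas, the contract/transpose/expand pipeline above amounts to stacking the spectra and taking per-wavelength statistics directly. A minimal numpy equivalent (same file-name and file-format assumptions as in the sketch for Solution 1) would be:

import numpy as np

# One row per spectrum, one column per wavelength.
ys = np.stack([np.loadtxt(f)[:, 1]
               for f in ("Spectrum-1.dat", "Spectrum-2.dat", "Spectrum-3.dat")])

# Per-wavelength mean and standard deviation across the spectra; np.std with
# the default ddof=0 matches the sqrt(<y^2> - <y>^2) definition used above.
mean_y = ys.mean(axis=0)
sigma_y = ys.std(axis=0)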
Note some of the code above can be greatly simplified using new features present in the upcoming 3.0 version, but that is the topic for another post. About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. Current version is 2.2. You can download its source code and compile it yourself or buy precompiled versions for MacOS and Windows there.

24 October 2020

Jelmer Vernooij: Debian Janitor: Hosters used by Debian packages

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor. The Janitor knows how to talk to different hosting platforms. For each hosting platform, it needs to support the platform-specific API for creating and managing merge proposals. For each hoster it also needs to have credentials. At the moment, it supports the GitHub API, Launchpad API and GitLab API. Both GitHub and Launchpad have only a single instance; the GitLab instances it supports are gitlab.com and salsa.debian.org. This provides coverage for the vast majority of Debian packages that can be accessed using Git. More than 75% of all packages are available on salsa - although in some cases, the Vcs-Git header has not yet been updated. Of the other 25%, the majority either does not declare where it is hosted using a Vcs-* header (10.5%), or has not yet migrated from alioth to another hosting platform (9.7%). A further 2.3% are hosted somewhere on GitHub (2%), Launchpad (0.18%) or GitLab.com (0.15%), in many cases in the same repository as the upstream code. The remaining 1.6% are hosted on many other hosts, primarily people's personal servers (which usually don't have an API for creating pull requests). Packages per hoster

Outdated Vcs-* headers It is possible that the 20% of packages that do not have a Vcs-* header or have a Vcs-* header that says they are on alioth are actually hosted elsewhere. However, it is hard to know where they are until a version with an updated Vcs-Git header is uploaded. The Janitor primarily relies on vcswatch to find the correct locations of repositories. vcswatch looks at Vcs-* headers but has its own heuristics as well. For about 2,000 packages (6%) that still have Vcs-* headers that point to alioth, vcswatch successfully finds their new home on salsa.
Merge Proposals by Hoster These proportions are also visible in the number of pull requests created by the Janitor on various hosters. The vast majority so far has been created on Salsa.
Hoster               Open    Merged & Applied   Closed
github.com             92                 168        5
gitlab.com             12                   3        0
code.launchpad.net     24                  51        1
salsa.debian.org    1,360               5,657      126
Merge Proposal statistics In this graph, Open means that the pull request has been created but likely nobody has looked at it yet. Merged means that the pull request has been marked as merged on the hoster, and applied means that the changes have ended up in the packaging branch but via a different route (e.g. cherry-picked or manually applied). Closed means that the pull request was closed without the changes being incorporated. Note that this excludes ~5,600 direct pushes, all of which were to salsa-hosted repositories. See also:

For more information about the Janitor's lintian-fixes efforts, see the landing page.

14 October 2020

Thomas Goirand: The Gnocchi package in Debian

This is a follow-up to Russell's blog post as seen here: https://etbe.coker.com.au/2020/10/13/first-try-gnocchi-statsd/. There are a bunch of things he wrote which I unfortunately must say are inaccurate, and sometimes even completely wrong. It is my point of view that none of the reported bugs are helpful for anyone that understands Gnocchi and how to set it up. It's however a terrible experience that Russell had, and I do understand why (and why it's not his fault). I'm very much open on how to fix this at the packaging level, though some things aren't IMO fixable. Here are the details. 1/ The daemon startups First of all, the most surprising thing is when Russell claimed that there are no startup scripts for the Gnocchi daemons. In fact, they all come with both systemd and sysv-rc support: # ls /lib/systemd/system/gnocchi-api.service
/lib/systemd/system/gnocchi-api.service
# /etc/init.d/gnocchi-api
/etc/init.d/gnocchi-api
Russell then tried to start gnocchi-api without the proper options that are set in the Debian scripts, and not surprisingly, this failed. Russell attempted to do what was in the upstream doc, which isn't adapted to what we have in Debian (the upstream doc is probably completely outdated, as Gnocchi is unfortunately not very well maintained upstream). The bug #972087 is therefore, IMO, not valid.

2/ The database setup By default for all things OpenStack in Debian, there are some debconf helpers using dbconfig-common to help users set up databases for their services. This is clearly for beginners, but that doesn't prevent one from attempting to understand what you're doing. More specifically for Gnocchi, there are 2 databases: one for Gnocchi itself, and one for the indexer, which does not necessarily use the same backend. The Debian package already sets up one database, but one has to do it manually for the indexer one. I'm sorry this isn't well enough documented. Now, while some packages support sqlite as a backend (since most things in OpenStack are using SQLAlchemy), it looks like Gnocchi doesn't right now. This is IMO a bug upstream, rather than a bug in the package. However, I don't think the Debian packages are to blame here, as they simply offer a unified interface, and it's up to the users to know what they are doing. SQLite is anyway not a production-ready backend. I'm not sure if I should close #971996 without any action, or just try to disable the SQLite backend option of this package because it may be confusing.

3/ The metrics UUID Russell then thinks the UUID should be set by default. This is probably right in a single-server setup; however, this wouldn't work when setting up a cluster, which is probably what most Gnocchi users will do. In this type of environment, the metrics UUID must be the same on the 3 servers, and setting a random (and therefore different) UUID on the 3 servers wouldn't work. So I'm also tempted to just close #972092 without any action on my side.

4/ The coordination URL Since Gnocchi is supposed to be set up with more than one server, as in OpenStack, having an HA setup is very common, then a backend for the coordination (i.e. sharing the workload) must be set. This is done by setting a URL that tooz understands. The best coordinator being Zookeeper, something like this should be set by hand: coordination_url=zookeeper://192.168.101.2:2181/ Here again, I don't think the Debian package is to be blamed for not providing the automation. I would however accept contributions to fix this and provide the choice using debconf; however, users would still need to understand what's going on, and set up something like Zookeeper (or redis, memcache, or any other backend supported by tooz) to act as coordinator.

5/ The Debconf interface cannot replace good documentation, and there's not much I can do about that at the package-maintainer level. Russell, I'm really sorry for the bad user experience you had with Gnocchi. Now that you know a little bit more about it, maybe you can have another go? Sure, the OpenStack telemetry system isn't an easy-to-understand beast, but it's IMO worth trying. And the recent versions can scale horizontally.

12 October 2020

Russell Coker: First Attempt at Gnocchi-Statsd

I've been investigating the options for tracking system statistics to diagnose performance problems. The idea is to track all sorts of data about the system (network use, disk IO, CPU, etc) and look for correlations at times of performance problems. DataDog is pretty good for this but expensive; it's apparently based on or inspired by the Etsy Statsd. It's claimed that gnocchi-statsd is the best implementation of the protocol used by the Etsy Statsd, so I decided to install that. I use Debian/Buster for this as that's what I'm using for the hardware that runs KVM VMs. Here is what I did:
# it depends on a local MySQL database
apt -y install mariadb-server mariadb-client
# install the basic packages for gnocchi
apt -y install gnocchi-common python3-gnocchiclient gnocchi-statsd uuid
In the Debconf prompts I told it to set up a database and not to manage keystone_authtoken with debconf (because I'm not doing a full OpenStack installation). This gave a non-working configuration as it didn't configure the MySQL database for the [indexer] section and the sqlite database that was configured didn't work for unknown reasons. I filed Debian bug #971996 about this [1]. To get this working you need to edit /etc/gnocchi/gnocchi.conf and change the url line in the [indexer] section to something like the following (where the password is taken from the [database] section).
url = mysql+pymysql://gnocchi-common:PASS@localhost:3306/gnocchidb
To get the statsd interface going you have to install the gnocchi-statsd package and edit /etc/gnocchi/gnocchi.conf to put a UUID in the resource_id field (the Debian package uuid is good for this). I filed Debian bug #972092 requesting that the UUID be set by default on install [2]. Here's an official page about how to operate Gnocchi [3]. The main thing I got from this was that the following commands need to be run from the command-line (I ran them as root in a VM for test purposes but would do so with minimum privs for a real deployment).
gnocchi-api
gnocchi-metricd
To communicate with Gnocchi you need the gnocchi-api program running, which uses the uwsgi program to provide the web interface by default. It seems that this was written for a version of uwsgi different from the one in Buster. I filed Debian bug #972087 with a patch to make it work with uwsgi [4]. Note that I didn't get to the stage of an end-to-end test, I just got it to basically run without error. After getting gnocchi-api running (in a terminal, not as a daemon, as Debian doesn't seem to have a service file for it), I ran the client program gnocchi and then gave it the status command, which failed (presumably due to the metrics daemon not running), but at least indicated that the client and the API could communicate. Then I ran gnocchi-metricd and got the following error:
2020-10-12 14:59:30,491 [9037] ERROR    gnocchi.cli.metricd: Unexpected error during processing job
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/gnocchi/cli/metricd.py", line 87, in run
    self._run_job()
  File "/usr/lib/python3/dist-packages/gnocchi/cli/metricd.py", line 248, in _run_job
    self.coord.update_capabilities(self.GROUP_ID, self.store.statistics)
  File "/usr/lib/python3/dist-packages/tooz/coordination.py", line 592, in update_capabilities
    raise tooz.NotImplemented
tooz.NotImplemented
At this stage I've had enough of gnocchi. I'll give the Etsy Statsd a go next. Update Thomas has responded to this post [5]. At this stage I'm not really interested in giving Gnocchi another go. There's still the issue of the indexer database, which should be different from the main database somehow, and sqlite (the config file default) doesn't work. I expect that if I were to persist with Gnocchi I would encounter more poorly described error messages from the code which either don't have Google hits when I search for them or have Google hits to unanswered questions from 5+ years ago. The Gnocchi systemd config files are in different packages to the programs; this confused me and I thought that there weren't any systemd service files. I had expected that installing a package with a daemon binary would also get the systemd unit file to match. The cluster features of Gnocchi are probably really good if you need that sort of thing. But if you have a small instance (EG a single VM server) then it's not needed. Also one of the original design ideas of the Etsy Statsd was that UDP was used because data could just be dropped if there was a problem. I think for many situations the same concept could apply to the entire stats service. If the other statsd programs don't do what I need then I may give Gnocchi another go.
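As a footnote on the statsd side of things: the Etsy statsd wire protocol is just short plain-text lines sent over UDP, so a test client is trivial. Here is a minimal Python sketch that sends one counter and one gauge; the host, the port (8125 is the conventional statsd port, which may or may not be what gnocchi-statsd listens on in this setup) and the metric names are made up for illustration.

import socket

def send_statsd(metric, value, kind, host="127.0.0.1", port=8125):
    """Send a single metric in the Etsy statsd line format: <name>:<value>|<type>."""
    line = f"{metric}:{value}|{kind}".encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line, (host, port))

# Hypothetical metric names, for illustration only.
send_statsd("vm.disk.reads", 1, "c")      # counter
send_statsd("vm.load.one", 0.42, "g")     # gauge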

6 October 2020

Iustin Pop: Late report for Nationalpark Bike Marathon 2020

I don't have to mention that 2020 is a special year, so all the normal race plans were out the window, and I was very happy and fortunate to be able to do even one race. And I'm only 3 weeks late in writing this race report :/ So, here's the story

Preparing for the race Because it was a special year, and everything was crazy, I actually managed to do more sports than usual, at least up to the end of July. So my fitness, and even body weight, was relatively fine, so I subscribed to the mid-distance race (official numbers: 78km distance, 1570 meters altitude), and then off I went to a proper summer vacation, in a hotel even. And while I did do some bike rides during that vacation, from then on my training regime went just off? I did train, I did ride, I did get significant PRs, but it didn't click anymore. Plus, due to, well, actually not sure what (work or coffee or something), my sleep regime also got completely ruined. On top of that, I didn't think about the fact that the race was going to be mid-September, and that high up in the mountains the weather could have been bad enough (I mean, in 2018 the weather was really bad even in August) such that I'd need to seriously think about clothing.

Race week I arrive in Scuol two days before the race, very tired (I think I got only 6 hours of sleep the night before), and definitely not in good shape. I was feeling bad enough that I was not quite sure I was going to race. At least the weather was OK, such that normal summer clothing would suffice. But the race info was mentioning dangerous segments, to be very careful, etc. etc., so I was quite anxious. Note 1: my wife says this was not the first time, and likely not the last time, that two days before the race I feel like quitting. And as I'm currently on-and-off reading the interesting The Brave Athlete: Calm the Fuck Down and Rise to the Occasion book (by Lesley Paterson and Simon Marshall; it's an interesting book, not sure if I recommend it or not), I am beginning to think that this is my reaction to races where I have overshot my usual distance. Or, in general, races where I fear the altitude gain. Not quite sure, but I think it is indeed the actual cause. So I spend Thursday evening feeling unwell, and thinking I'll see how Friday goes. Friday comes, and having slept reasonably well the entire night, I pick up my race number, then I take another nap in the afternoon - in total, I slept around 13 hours that day. So I felt much better, and was looking forward to the race. Saturday morning comes, I manage to wake up early, and get ready in time; I almost didn't panic at all about being late. Note 2: my wife also says that this is the usual way I behave. Hence, most of it must be a mental issue, rather than a real physical one

Race I reach the train station in time, I get on the train, and by the time the train reached Zernez, I had fully calmed down. There was an entire hour's wait before the race though, and it was quite chilly. Of course I didn't bring anything besides what I was wearing, relying on the temperature getting better later in the day. During the wait, there were two interesting things happening. First, we actually got there (in Zernez) before the first people from the long distance passed by, both men and women. Seeing them pass by was cool, thinking they already had ~1,200m altitude in just 30-ish kilometres. The second thing was, as this was the middle and not the shortest distance, the people in the group looked different than in previous years. More precisely, they were looking very fit, and I was feeling fat. Well, I am overweight, so it was expected, but I was feeling it even more than usual. I think only one or two in ten people looked as fit as me or less. And of course, the pictures post-race show me even less fit-looking than I thought. Ah, self-deception is a sweet thing. And yes, we all had to wear masks, up until the last minute. It was interesting, but not actually annoying - and a small enough price for being able to race! Then the race starts, and as opposed to many other years, it starts slow. I didn't feel that rush of people starting fast, it was reasonable?

First part of the race (good) Thus started the first part of the race, on a new route that I was unfamiliar with. There was not too much climbing, to be honest, and there was some tricky single-trail through the woods, with lots of roots. I actually had to get off the bike and push it, since it was too difficult to pedal uphill on that path. Other than that, I was managing so far to adjust my efforts well enough that my usual problems related to climbing (lower back pain) didn't yet appear, even as the overall climbed meters were increasing. I was quite happy at that, and had lots of reserves. To my (pleasant) surprise, two positive things happened:
  • I was never alone, a sign that I wasn't too far back.
  • I was passing/being passed by people, both on climbs but also on descents! It's rare, but I did overtake a few people on a difficult trail downhill.
With all the back and forth, a few people became familiar (or at least their kit), and it was fun seeing who is better uphill vs. downhill.

And second part (not so good) I finally get to (around) S-chanf, on a very nice but small descent and on flat roads, and start the normal route for the short race. Something was off though - I knew from past years that these last ~47km have around 700-800m altitude, but I had already done around 1000m. So the promised 1571m were likely to be off by at least 100-150m. I set myself a new target of 1700m, and adjusted my efforts based on that. And then, like clockwork on the 3:00:00 mark, the route exited the forest, the sun got out of the clouds, and the temperature started to increase from 16-17°C to 26°C+, with peaks of 31°C. I'm not joking: at 2:58:43 the temperature was 16°C, at 3:00:00 it was 18°C, at 3:05:45 it was 26°C. Heat and climbing are my two nemeses, and after having a pretty good race for the first 3 hours and almost exactly 1200m of climbing, I started feeling quite miserable. Well, it was not all bad. There were some nice stretches of flat, where I knew I could pedal strongly and keep up with other people, until my chain dropped, so I had to stop, re-set it, and lose 2 minutes. Sigh. But, at least, I was familiar with this race, or so I thought. I completely mis-remembered the last ~20km as a two-punch climb, Guarda and Ftan, whereas it is actually a three-punch one: Guarda, Ardez, and only then Ftan. It doesn't help that Ardez has the nice ruins that I was remembering and which threw me off. The saddest part of the day was here, on one of the last climbs - not sure if to Guarda or to Ardez - where a guy overtakes me, and tells me he's glad he finally caught up with me, he almost got me five or six times (!), but I always managed to break off. Always, until now. Now, this was sad (I was huffing and puffing like a steam locomotive by now), but also positive, as I never had that before. One good, one bad? And of course, it was more than 1,700m altitude, it was 1,816m. And the descent to Scuol was shorter, and due to Covid changes it didn't end as usual with the small but sharp climb which I just love. But, I finished, and without any actual issues, and no dangerous segments as far as I saw. I was anxious for no good reason.

Conclusion (or confusion?) So this race was interesting: three hours (to the minute) in which I went 43.5km, climbed 1200m, felt great, and was able to push and push. And then the second part, only ~32km, climbed only 600m, but which felt quite miserable. I don't know if it was mainly the heat, mainly my body giving up after that much climbing (or time?), or both. But it's clear that I can't reliably race for more than around these numbers: 3 hours, ~1000+m altitude, in >20°C temperature. One thing that I managed to achieve though: except for the technically complex trail at the beginning where I pushed the bike, I did not ever stop and push the bike uphill because I was too tired. Instead, I managed (badly) to do the switch sitting/standing as much as I could motivate myself, and thus continue pushing uphill. This is an achievement for me, since mentally it's oh so easy to stop and push the bike, so I was quite glad. As to the race results, they were quite atrocious:
  • age category (men), 38 out of 52 finishers, 4h54m, with first finisher doing 3h09m, so 50% slower (!)
  • overall (men), 138 out of 173 finishers, with first finisher 2h53m.
These results clearly don't align with my feeling of a good first half of the race, so either it was purely subjective, or maybe in this special year only really strong people registered for the race, or something else. One positive aspect though, compared to most other years, was the consistency of my placement (age and overall):
  • Zuoz: 38 / 141
  • S-Chanf: 39 / 141
  • Zernez: 39 / 141
  • Guarda: 38 / 138
  • Ftan: 38 / 138
  • ( next - whatever this is): 38 / 138
  • Finish: 38 / 138
So despite all my ranting above, and all the stats I'm pulling out of my own race, it looks like my position in the race was fully settled in the really first part, and I didn't gain or lose practically anything afterwards. I did dip one place but then gained it back (on the climb to Guarda, even). The split times (per-segment rankings) are a bit more variable, and show that I was actually fast on the climbs but losing speed on the descents, which I really don't understand anymore:
  • Zernez-Zuoz (unclear type): 38 / 141
  • Zuoz-S-Chanf (unclear type): 40 / 141
  • S-Chanf-Zernez (mostly downhill): 39 / 143
  • Zernez-Guarda (mostly uphill): 37 / 136
  • Guarda-Ftan (mostly uphill): 37 / 131
  • Ftan-Scuol (mostly downhill): 43 / 156
The difference at the end is striking. I'm visually matching the map positions to km and then using VeloViewer to compute the altitude gain, but Zernez to Guarda is 420m of altitude gain, and Guarda to Ftan is 200m, and yet on both I was faster than my final place, and by quite a few places overall, only to lose that on the descent (Ftan-Scuol), and by a large margin. So, amongst all the confusion here, I think the story overall is:
  • indeed I was quite fit for me, so the climbs were better than my place in the race (if that makes sense).
  • however, I'm not actually good at climbing nor fit (watts/kg), so I'm still way back in the pack (oops!).
  • and I do suck at descending, both me (skills) and possibly my bike setup as well (too high tyre pressure, etc.), so I lose even more time here
As usual, the final take-away points are: lose the extra weight that is not needed, get better skills, get a better core to be better at climbing. I'll finish here with one pic, taken in Guarda (4 hours into the race, more or less):
Climbing in Guarda
Until next year!

2 October 2020

Ian Jackson: Mailman vs DKIM - a novel solution

tl;dr: Do not configure Mailman to replace the mail domains in From: headers. Instead, try out my small new program which can make your Mailman transparent, so that DKIM signatures survive.

Background and narrative DKIM NB: This explanation is going to be somewhat simplified. I am going to gloss over some details and make some slightly approximate statements. DKIM is a new anti-spoofing mechanism for Internet email, intended to help fight spam. DKIM, paired with the DMARC policy system, has been remarkably successful at stemming the flood of joe-job spams. As usually deployed, DKIM works like this: When a message is originally sent, the author's MUA sends it to the MTA for their From: domain for outward delivery. The From: domain mailserver calculates a cryptographic signature of the message, and puts the signature in the headers of the message. Obviously not the whole message can be signed, since at the very least additional headers need to be added in transit, and sometimes headers need to be modified too. The signing MTA gets to decide what parts of the message are covered by the signature: they nominate the header fields that are covered by the signature, and specify how to handle the body. A recipient MTA looks up the public key for the From: domain in the DNS, and checks the signature. If the signature doesn't match, depending on policy (originator's policy, in the DNS, and recipient's policy of course), typically the message will be treated as spam. The originating site has a lot of control over what happens in practice. They get to publish a formal (DMARC) policy in the DNS which advises recipients what they should do with mails claiming to be from their site. As mentioned, they can say which headers are covered by the signature - including the ability to sign the absence of a particular header - so they can control which headers downstreams can get away with adding or modifying. And they can set a normalisation policy, which controls how precisely the message must match the one that they sent.

Mailman Mailman is, of course, the extremely popular mailing list manager. There are a lot of things to like about it. I choose to run it myself not just because it's popular but also because it provides a relatively competent web UI and relatively competent email (un)subscription interfaces, decent bounce handling, and a pretty good set of moderation and posting access controls. The Xen Project mailing lists also run on mailman. Recently we had some difficulties with messages sent by Citrix staff (including myself) to Xen mailing lists being treated as spam. Recipient mail systems were saying the DKIM signatures were invalid. This was in fact true. Citrix has chosen a fairly strict DKIM policy; in particular, they have chosen "simple" normalisation - meaning that signed message headers must precisely match in syntax as well as in a semantic sense. Examining the failing-DKIM messages showed that this was definitely a factor.

Applying my Opinions about email My Bayesian priors tend to suggest that a mail problem involving corporate email is the fault of the corporate email. However in this case that doesn't seem true to me. My starting point is that I think mail systems should not modify messages unnecessarily. None of the DKIM-breaking modifications made by Mailman seemed necessary to me. I have on previous occasions gone to corporate IT and requested quite firmly that things I felt were broken should be changed.
But it seemed wrong to go to corporate IT and ask them to change their published DKIM/DMARC policy to accommodate a behaviour in Mailman which I didn't agree with myself. I felt that instead I should put (with my Xen Project hat on) my own house in order.

Getting Mailman not to modify messages So, I needed our Mailman to stop modifying the headers. I needed it to not even reformat them. A brief look at the source code to Mailman showed that this was not going to be so easy. Mailman has a lot of features whose very purpose is to modify messages. Personally, as I say, I don't much like these features. I think the subject line tags, CC list manipulations, and so on, are a nuisance and not really Proper. But they are definitely part of why Mailman has become so popular and I can definitely see why the Mailman authors have done things this way. But these features mean Mailman has to disassemble incoming messages, and then reassemble them again on output. It is very difficult to do that and still faithfully reassemble the original headers byte-for-byte in the case where nothing actually wanted to modify them. There are existing bug reports[1] [2] [3] [4]; I can see why they are still open.

Rejected approach: From:-mangling This situation is hardly unique to the Xen lists. Many others have struggled with it. The best that seems to have been come up with so far is to turn on a new Mailman feature which rewrites the From: header of the messages that go through it, to contain the list's domain name instead of the originator's. I think this is really pretty nasty. It breaks normal use of email, such as reply-to-author. It is having Mailman do additional mangling of the message in order to solve the problems caused by other undesirable manglings!

Solution! As you can see, I asked myself: I want Mailman not to modify messages at all; how can I get it to do that? Given the existing structure of Mailman - with a lot of message-modifying functionality - that would really mean adding a bypass mode. It would have to spot, presumably depending on config settings, that messages were not to be edited; and then, it would avoid disassembling and reassembling the message at all, and bypass the message modification stages. The message would still have to be parsed of course - it's just that the copy sent out ought to be pretty much the incoming message. When I put it to myself like that I had a thought: couldn't I implement this outside Mailman? What if I took a copy of every incoming message, and then post-processed Mailman's output to restore the original? It turns out that this is quite easy and works rather well!

outflank-mailman outflank-mailman is a 233-line script, plus documentation, installation instructions, etc. It is designed to run from your MTA, on all messages going into, and coming from, Mailman. On input, it saves a copy of the message in a sqlite database, and leaves a note in a new Outflank-Mailman-Id header. On output, it does some checks, finds the original message, and then combines the original incoming message with carefully-selected headers from the version that Mailman decided should be sent (a simplified sketch of this idea appears at the end of this post). This was deployed for the Xen Project lists on Tuesday morning and it seems to be working well so far. If you administer Mailman lists, and fancy some new software to address this problem, please do try it out.

Matters arising - Mail filtering, DKIM Overall I think DKIM is a helpful contribution to the fight against spam (unlike SPF, which is fundamentally misdirected and also broken).
Spam is an extremely serious problem; most receiving mail servers experience more attempts to deliver spam than real mail, by orders of magnitude. But DKIM is not without downsides. Inherent in the design of anything like DKIM is that arbitrary modification of messages by list servers is no longer possible. In principle it might be possible to design a system which tolerated modifications reasonable for mailing lists but it would be quite complicated and have to somehow not tolerate similar modifications in other contexts. So DKIM means that lists can no longer add those unsubscribe footers to mailing list messages. The "new way" (RFC2369, July 1998) to do this is with the List-Unsubscribe header. Hopefully a good MUA will be able to deal with unsubscription semiautomatically, and I think by now an adequate MUA should at least display these headers by default. Sender: There are implications for recipient-side filtering too. The "traditional" correct way to spot mailing list mail was to look for Resent-To:, which can be added without breaking DKIM; the "new" (RFC2919, March 2001) correct way is List-Id:, likewise fine. But during the initial deployment of outflank-mailman I discovered that many subscribers were detecting that a message was list traffic by looking at the Sender: header. I'm told that some mail systems (apparently Microsoft's included) make it inconvenient to filter on List-Id. Really, I think a mailing list ought not to be modifying Sender:. Given Sender:'s original definition and semantics, there might well be reasonable reasons for a mailing list posting to have different From: and Sender: headers, and then the original Sender: ought not to be lost. And a mailing list's operation does not fit well into the original definition of Sender:. I suspect that list software likes to put in Sender mostly for historical reasons; notably, a long time ago it was not uncommon for broken mail systems to send bounces to the Sender: header rather than the envelope sender (SMTP MAIL FROM). DKIM makes this more of a problem. Unfortunately the DKIM specifications are vague about what headers one should sign, but they pretty much definitely include Sender: if it is present, and some materials encourage signing the absence of Sender:. The latter is Exim's default configuration when DKIM-signing is enabled. Frankly there seems little excuse for systems to not readily support and encourage filtering on List-Id, 20 years later, but I don't want to make life hard for my users. For now we are running a compromise configuration: if there wasn't a Sender: in the original, take Mailman's added one. This will result in (i) misfiltering for some messages whose poster put in a Sender:, and (ii) DKIM failures for messages whose originating system signed the absence of a Sender:. I'm going to mine the db for some stats after it's been deployed for a week or so, to see which of these problems is worse and decide what to do about it. Mail routing For DKIM to work, messages being sent From: a particular mail domain must go through a system trusted by that domain, so they can be signed. Most users tend to do this anyway: their mail provider gives them an IMAP server and an authenticated SMTP submission server, and they configure those details in their MUA. The MUA has a notion of "accounts" and according to the user's selection for an outgoing message, connects to the authenticated submission service (usually using TLS over the global internet).
Trad unix systems where messages are sent using the local sendmail or localhost SMTP submission (perhaps by automated systems, or perhaps by human users) are fine too. The smarthost can do the DKIM signing. But this solution is awkward for a user of a trad MUA in what I'll call "alias account" setups: where a user has an address at a mail domain belonging to different people to the system on which they run their MUA (perhaps even several such aliases for different hats). Traditionally this worked by the mail domain forwarding the incoming mail, and the user simply self-declaring their identity at the alias domain. Without DKIM there is nothing stopping anyone self-declaring their own From: line. If DKIM is to be enabled for such a user (preventing people forging mail as that user), the user will have to somehow arrange that their trad unix MUA's outbound mail stream goes via their mail alias provider. For a single-user sending unix system this can be done with tolerably complex configuration in an MTA like Exim. For shared systems this gets more awkward and might require some hairy shell scripting etc.
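Here is the rough sketch, in Python, of the store-and-restore idea behind outflank-mailman that was promised above. It is not the real program (which is a script invoked from the MTA); the sqlite table layout, the list of headers taken from Mailman's output, and the assumption that Mailman passes the Outflank-Mailman-Id header through unchanged are all simplifications for illustration.
#!/usr/bin/env python3
# Sketch of the store-and-restore idea: keep the pristine incoming
# message, let Mailman do its thing, then send out the original bytes
# plus a few selected headers from Mailman's version.  Not the real
# outflank-mailman; table name and header selection are assumptions.
import sqlite3
import uuid
from email import message_from_bytes, policy

db = sqlite3.connect("outflank.db")
db.execute("CREATE TABLE IF NOT EXISTS msgs (id TEXT PRIMARY KEY, raw BLOB)")

def on_input(raw: bytes) -> bytes:
    """Store the original message and tag it before Mailman sees it."""
    msg_id = uuid.uuid4().hex
    db.execute("INSERT INTO msgs VALUES (?, ?)", (msg_id, raw))
    db.commit()
    return b"Outflank-Mailman-Id: " + msg_id.encode() + b"\r\n" + raw

# Headers we are prepared to take from Mailman's rewritten copy
# (illustrative; the real tool makes its own careful selection).
TAKE_FROM_MAILMAN = ["List-Id", "List-Unsubscribe", "List-Post", "Errors-To"]

def on_output(mailman_raw: bytes) -> bytes:
    """Rebuild the outgoing message around the stored original."""
    out = message_from_bytes(mailman_raw, policy=policy.SMTP)
    msg_id = out["Outflank-Mailman-Id"]
    (orig_raw,) = db.execute("SELECT raw FROM msgs WHERE id = ?",
                             (msg_id,)).fetchone()
    orig = message_from_bytes(orig_raw, policy=policy.SMTP)
    extra = []
    for h in TAKE_FROM_MAILMAN:
        if h in out and h not in orig:
            extra.append(f"{h}: {out[h]}\r\n".encode())
    # The Sender: compromise described earlier: keep the poster's
    # Sender: if there was one, otherwise accept the one Mailman added.
    if "Sender" in out and "Sender" not in orig:
        extra.append(f"Sender: {out['Sender']}\r\n".encode())
    # Prepend the selected headers; the original message bytes (and
    # with them any DKIM signature over them) pass through untouched.
    return b"".join(extra) + orig_raw
In a real deployment the MTA would run the input step before handing each message to Mailman and the output step on Mailman's outgoing copy; this sketch glosses over locking, expiry of stored messages, and the sanity checks mentioned above.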
edited 2020-10-01 21:22 and 21:35 and -02 10:50 +0100 to fix typos and 21:28 to linkify "my small program" in the tl;dr



23 September 2020

Vincent Fourmond: Release 2.2 of QSoas

The new release of QSoas is finally ready! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets and even SVG output! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other, related commands)! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.
Freely available binary images for QSoas 1.0 In addition to the new release, we are now releasing the binary images for MacOS and Windows for the release 1.0. They are also freely available for download from the downloads page.

About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

18 September 2020

Sven Hoexter: Avoiding the GitHub WebUI

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about my clumsy aliases and shell functions I cobbled together in the past month. Background story is that my dayjob moved to GitHub coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard. The setup we have is painful on its own, with several orgs and repos which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti GitHub workflows, so you must have permission on the repo, create a branch in that repo as a feature branch, and open a PR for the merge back into master. gh functions and aliases
# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"
#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"
### github support functions, most invoked with a PR ID as $1
#primary function to review PRs
function ghs {
    gh pr view ${1}
    gh pr checks ${1}
    gh pr diff ${1}
}
# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}
# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r ${1}
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}
# get an overview over the files changed in a PR
function ghf {
    gh pr diff ${1} | diffstat -l
}
# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}
wtfutil I have a terminal covering half my screensize with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:
github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github
The -label:dependencies is used here to filter out dependabot PRs in the dashboard. Workflow Look at a PR with ghv $ID, and if it's OK, ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard. Security Considerations The world is full of bad jokes. For the WebUI access I've the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access; everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

12 September 2020

Jelmer Vernooij: Debian Janitor: All Packages Processed with Lintian-Brush

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor. On 12 July 2019, the Janitor started fixing lintian issues in packages in the Debian archive. Now, a year and a half later, it has processed every one of the almost 28,000 packages at least once. Graph with Lintian Fixes Burndown As discussed two weeks ago, this has resulted in roughly 65,000 total changes. These 65,000 changes were made to a total of almost 17,000 packages. Of the remaining packages, for about 4,500 lintian-brush could not make any improvements. The rest (about 6,500) failed to be processed for one of many reasons: they are e.g. not yet migrated off alioth, use uncommon formatting that can't be preserved, or failed to build for one reason or another. Graph with runs by status (success, failed, nothing-to-do) Now that the entire archive has been processed, packages are prioritized based on the likelihood of a change being made to them successfully. Over the course of its existence, the Janitor has slowly gained support for a wider variety of packaging methods. For example, it can now edit the templates for some of the generated control files. Many of the packages that the janitor was unable to propose changes for the first time around are expected to be correctly handled when they are reprocessed. If you're a Debian developer, you can find the list of improvements made by the janitor in your packages by going to https://janitor.debian.net/m/.

For more information about the Janitor's lintian-fixes efforts, see the landing page.

30 August 2020

Dirk Eddelbuettel: RcppSMC 0.2.2: Small updates

A new release 0.2.2 of the RcppSMC package arrived on CRAN earlier today (and once again as a very quick pretest-publish within minutes of submission). RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. This release contains two fixes from a while back that had not been released, a CRAN-requested update plus a few more minor polishes to make it pass R CMD check --as-cran as nicely as usual.

Changes in RcppSMC version 0.2.2 (2020-08-30)
  • Package helper files .editorconfig added (Adam in #43).
  • Change const correctness and add return (Leah in #44).
  • Updates to continuous integration and R versions used (Dirk)
  • Accomodate CRAN request, other updates to CRAN Policy (Dirk in #49 fixing #48).

Courtesy of CRANberries, there is a diffstat report for this release. More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 August 2020

Jelmer Vernooij: Debian Janitor: 8,200 changes landed so far

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor. The bot has been submitting merge requests for about seven months now. The rollout has happened gradually across the Debian archive, and the bot is now enabled for all packages maintained on Salsa, GitLab, GitHub and Launchpad. There are currently over 1,000 open merge requests, and close to 3,400 merge requests have been merged so far. Direct pushes are enabled for a number of large Debian teams, with about 5,000 direct pushes to date. That covers about 11,000 lintian tags of varying severities (about 75 different varieties) fixed across Debian. Janitor pushes over time Janitor merges over time

For more information about the Janitor's lintian-fixes efforts, see the landing page.

14 August 2020

Russell Coker: Jitsi on Debian

I've just set up an instance of the Jitsi video-conference software for my local LUG. Here is an overview of how to set it up on Debian. Firstly create a new virtual machine to run it. Jitsi is complex and has lots of inter-dependencies. Its packages want to help you by dragging in other packages and configuring them. This is great if you have a blank slate to start with, but if you already have one component installed and running then it can break things. It wants to configure the Prosody Jabber server and a web server and my first attempt at an install failed when it tried to reconfigure the running instances of Prosody and Apache. Here's the upstream install docs [1]. They cover everything fairly well, but I'll document the configuration I wanted (basic public server with password required to create a meeting). Basic Installation The first thing to do is to get a short DNS name like j.example.com. People will type that every time they connect and will thank you for making it short. Using Certbot for certificates is best. It seems that you need them for j.example.com and auth.j.example.com.
apt install curl certbot
/usr/bin/letsencrypt certonly --standalone -d j.example.com,auth.j.example.com -m you@example.com
curl https://download.jitsi.org/jitsi-key.gpg.key | gpg --dearmor > /etc/apt/jitsi-keyring.gpg
echo "deb [signed-by=/etc/apt/jitsi-keyring.gpg] https://download.jitsi.org stable/" > /etc/apt/sources.list.d/jitsi-stable.list
apt-get update
apt-get -y install jitsi-meet
When apt installs jitsi-meet and its dependencies you get asked many questions for configuring things. Most of it works well. If you get the nginx certificate wrong or don't have the full chain then phone clients will abort connections for no apparent reason; it seems that you need to edit /etc/nginx/sites-enabled/j.example.com.conf to use the following ssl configuration:
ssl_certificate /etc/letsencrypt/live/j.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/j.example.com/privkey.pem;
Then you have to edit /etc/prosody/conf.d/j.example.com.cfg.lua to use the following ssl configuration:
key = "/etc/letsencrypt/live/j.example.com/privkey.pem";
certificate = "/etc/letsencrypt/live/j.example.com/fullchain.pem";
It seems that you need to have an /etc/hosts entry with the public IP address of your server and the names j.example.com j auth.j.example.com. Jitsi also appears to use the names speakerstats.j.example.com conferenceduration.j.example.com lobby.j.example.com conference.j.example.com conference.j.example.com internal.auth.j.example.com but they aren't required for a basic setup; I guess you could add them to /etc/hosts to avoid the possibility of strange errors due to it not finding an internal host name. There are optional features of Jitsi which require some of these names, but so far I've only used the basic functionality. Access Control This section describes how to restrict conference creation to authenticated users. The secure-domain document [2] shows how to restrict access, but I'll summarise the basics. Edit /etc/prosody/conf.avail/j.example.com.cfg.lua and use the following line in the main VirtualHost section:
        authentication = "internal_hashed"
Then add the following section:
VirtualHost "guest.j.example.com"
        authentication = "anonymous"
        c2s_require_encryption = false
        modules_enabled = {
            "turncredentials";
        }
Edit /etc/jitsi/meet/j.example.com-config.js and add the following line:
        anonymousdomain: 'guest.j.example.com',
Edit /etc/jitsi/jicofo/sip-communicator.properties and add the following line:
org.jitsi.jicofo.auth.URL=XMPP:j.example.com
Then run commands like the following to create new users who can create rooms:
prosodyctl register admin j.example.com
Then restart most things (Prosody at least, maybe parts of Jitsi too); I rebooted the VM. Now only the accounts you created on the Prosody server will be able to create new meetings. You should be able to add, delete, and change passwords for users via prosodyctl while it's running once you have set this up. Conclusion Once I gave up on the idea of running Jitsi on the same server as anything else it wasn't particularly difficult to set up. Some bits were a little fiddly and hopefully this post will be a useful resource for people who have trouble understanding the documentation. Generally it's not difficult to install if it is the only thing running on a VM.
